1 - 20 of 33
1.
Cogn Res Princ Implic ; 9(1): 24, 2024 04 23.
Article En | MEDLINE | ID: mdl-38652184

As technology grows more sophisticated, humans can offload a variety of tasks to algorithms. Here, we investigated whether the extent to which people are willing to offload an attentionally demanding task to an algorithm is modulated by the availability of a bonus task and by knowledge about the algorithm's capacity. Participants performed a multiple object tracking (MOT) task which required them to visually track targets on a screen. Participants could offload an unlimited number of targets to a "computer partner". If participants decided to offload the entire task to the computer, they could instead perform a bonus task that yielded additional financial gain - however, this gain was conditional on high accuracy in the MOT task. Thus, participants should only offload the entire task if they trusted the computer to perform accurately. We found that participants were significantly more willing to offload the task completely if they were informed beforehand that the computer's accuracy was flawless (Experiment 1 vs. 2). Participants' offloading behavior was not significantly affected by whether the bonus task was incentivized (Experiment 2 vs. 3). These results, combined with those from our previous study (Wahn et al. in PLoS ONE 18:e0286102, 2023), which did not include a bonus task but was otherwise identical, show that human willingness to offload an attentionally demanding task to an algorithm is considerably boosted by the availability of a bonus task - even if not incentivized - and by knowledge about the algorithm's capacity.


Algorithms , Humans , Adult , Male , Female , Young Adult , Psychomotor Performance/physiology , Attention/physiology , Cognition/physiology
2.
Article En | MEDLINE | ID: mdl-37684501

When acting jointly, individuals often attend and respond to the same object or spatial location in complementary ways (e.g., when passing a mug, one person grasps its handle with a precision grip; the other receives it with a whole-hand grip). At the same time, the spatial relation between individuals' actions affects attentional orienting: one is slower to attend and respond to locations another person previously acted upon than to alternate locations ("social inhibition of return", social IOR). Achieving joint goals (e.g., passing a mug), however, often requires complementary return responses to a co-actor's previous location. This raises the question of whether attentional orienting, and hence the social IOR, is affected by the (joint) goal our actions are directed at. The present study addresses this question. Participants responded to cued locations on a computer screen, taking turns with a virtual co-actor. They either pursued an individual goal or performed complementary actions with the co-actor in pursuit of a joint goal. Four experiments showed that the social IOR was significantly modulated when participant and co-actor pursued a joint goal. This suggests that attentional orienting is affected not only by the spatial but also by the social relation between two agents' actions. Our findings thus extend research on interpersonal perception-action effects, showing that the way another agent's perceived action shapes our own depends on whether we share a joint goal with that agent.

3.
Atten Percept Psychophys ; 85(6): 1962-1975, 2023 Aug.
Article En | MEDLINE | ID: mdl-37410254

In everyday life, people often work together to accomplish a joint goal. Working together is often beneficial as it can result in a higher performance compared to working alone - a so-called "group benefit". While several factors influencing group benefits have been investigated in a range of tasks, to date, they have not been examined collectively with an integrative statistical approach such as linear modeling. To address this gap in the literature, we investigated several factors that are highly relevant for group benefits (i.e., task feedback, information about the co-actor's actions, the similarity in the individual performances, and personality traits) and used these factors as predictors in a linear model to predict group benefits in a joint multiple object tracking (MOT) task. In the joint MOT task, pairs of participants jointly tracked the movements of target objects among distractor objects and, depending on the experiment, either received group performance feedback, individual performance feedback, information about the group member's performed actions, or a combination of these types of information. We found that the predictors collectively account for half of the variance and make non-redundant contributions towards predicting group benefits, suggesting that they independently influence group benefits. The model also accurately predicts group benefits, suggesting that it could be used to anticipate group benefits for individuals who have not yet performed a joint task together. Given that the investigated factors are relevant for other joint tasks, our model provides a first step towards developing a more general model for predicting group benefits across several shared tasks.
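The linear-modeling approach described in this abstract can be sketched in a few lines. The sketch below is a minimal illustration, not the study's actual analysis: the predictor values are simulated stand-ins for the factors named above (performance similarity, feedback, action information, personality), and the coefficients are invented.

```python
import numpy as np

# Hypothetical data: one row per pair, one column per predictor (e.g.,
# performance similarity, feedback condition, action information,
# personality score). Values are simulated for illustration only.
rng = np.random.default_rng(0)
n_pairs = 40
X = rng.normal(size=(n_pairs, 4))
true_beta = np.array([0.5, 0.2, 0.3, 0.1])          # invented weights
y = X @ true_beta + rng.normal(scale=0.5, size=n_pairs)  # simulated group benefit

# Ordinary least squares with an intercept column in the design matrix
A = np.column_stack([np.ones(n_pairs), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: the proportion of variance in group benefit the predictors account for
residuals = y - A @ beta
r2 = 1 - residuals.var() / y.var()
print(f"R^2 = {r2:.2f}")
```

With real data, `r2` corresponds to the "half of the variance" figure reported in the abstract, and non-redundancy of predictors would be checked by comparing nested models or inspecting predictor correlations.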


Movement , Visual Perception , Humans
4.
PLoS One ; 18(5): e0286102, 2023.
Article En | MEDLINE | ID: mdl-37205658

In the near future, humans will increasingly be required to offload tasks to artificial systems to facilitate daily as well as professional activities. Yet, research has shown that humans are often averse to offloading tasks to algorithms (so-called "algorithmic aversion"). In the present study, we asked whether this aversion is also present when humans act under high cognitive load. Participants performed an attentionally demanding task (a multiple object tracking (MOT) task), which required them to track a subset of moving targets among distractors on a computer screen. Participants first performed the MOT task alone (Solo condition) and were then given the option to offload an unlimited number of targets to a computer partner (Joint condition). We found that participants significantly offloaded some (but not all) targets to the computer partner, thereby improving their individual tracking accuracy (Experiment 1). A similar tendency for offloading was observed when participants were informed beforehand that the computer partner's tracking accuracy was flawless (Experiment 2). The present findings show that humans are willing to (partially) offload task demands to an algorithm to reduce their own cognitive load. We suggest that the cognitive load of a task is an important factor to consider when evaluating human tendencies for offloading cognition onto artificial systems.


Algorithms , Cognition , Humans , Computers , Affect
5.
Psychol Res ; 87(5): 1323-1333, 2023 Jul.
Article En | MEDLINE | ID: mdl-36378344

When looking for a certain object or person, individuals often engage in collaborative visual search, i.e., they search together by coordinating their behavior. For instance, when parents are looking for their child on a busy playground, they might search collaboratively by dividing the search area. This type of labor division in collaborative visual search could be beneficial not only in daily life, but also in professional life (e.g., at airport security screening, lifeguarding, or diagnostic radiology). To better understand the mechanisms underlying this type of collaborative behavior, as well as its benefits and costs, researchers have studied visual search scenarios in the laboratory. The aim of this review article is to provide a brief overview of the results of these studies. Are individuals faster if they search together compared to alone? And if so, should they simply search in parallel, or will they benefit from agreeing on a specific labor division? How should they divide the search space, and how should they communicate this division? Should a consensus be reached (target present or absent?) before ending the search? We address these and further key questions, focusing on the aspect of labor division. In conclusion, we integrate the reviewed findings into an applied context, point out which questions still remain, and put forward suggestions for future research. We hope that this review can serve not only as a theoretical foundation for basic research but also as a practical inspiration for applied research and development.


Child , Humans
6.
Psychol Res ; 86(6): 1930-1943, 2022 Sep.
Article En | MEDLINE | ID: mdl-34854983

Eye contact is a dynamic social signal that captures attention and plays a critical role in human communication. In particular, direct gaze often accompanies communicative acts in an ostensive function: a speaker directs her gaze towards the addressee to highlight the fact that this message is being intentionally communicated to her. The addressee, in turn, integrates the speaker's auditory and visual speech signals (i.e., her vocal sounds and lip movements) into a unitary percept. It is an open question whether the speaker's gaze affects how the addressee integrates the speaker's multisensory speech signals. We investigated this question using the classic McGurk illusion, an illusory percept created by presenting mismatching auditory (vocal sounds) and visual information (speaker's lip movements). Specifically, we manipulated whether the speaker (a) moved his eyelids up/down (i.e., open/closed his eyes) prior to speaking or did not show any eye motion, and (b) spoke with open or closed eyes. When the speaker's eyes moved (i.e., opened or closed) before an utterance, and when the speaker spoke with closed eyes, the McGurk illusion was weakened (i.e., addressees reported significantly fewer illusory percepts). In line with previous research, this suggests that motion (opening or closing), as well as the closed state of the speaker's eyes, captured addressees' attention, thereby reducing the influence of the speaker's lip movements on the addressees' audiovisual integration process. Our findings reaffirm the power of speaker gaze to guide attention, showing that its dynamics can modulate low-level processes such as the integration of multisensory speech signals.


Illusions , Speech Perception , Attention , Female , Humans , Lip , Speech , Visual Perception
7.
J Exp Psychol Hum Percept Perform ; 47(9): 1166-1181, 2021 Sep.
Article En | MEDLINE | ID: mdl-34694847

People often perform visual tasks together, for example, when looking for a misplaced key. When performing such tasks jointly, people coordinate their actions to divide the labor, for example, by looking for the misplaced key in different rooms. This way, they tend to perform better together than individually - they attain a group benefit. A crucial factor determining whether (and to what extent) individuals attain a group benefit is the amount of information they receive about each other's actions and performance. We systematically varied, across 8 conditions, the information participant pairs received while jointly performing a visual task. We find that participants can attain a group benefit without receiving any information (and thus cannot coordinate their actions). However, actions are coordinated and the group benefit is enhanced if participants receive information about each other's actions or performance. If both types of information are received, participants are faster in creating efficient labor divisions. To create divisions, participants used the screen center as a reference to divide the labor into a left and right side. When participants cannot coordinate actions, they exhibit a bias toward choosing the same side, but they forgo this bias once action coordination is possible, thereby boosting group performance.

8.
Atten Percept Psychophys ; 83(8): 3056-3068, 2021 Nov.
Article En | MEDLINE | ID: mdl-34561815

Humans coordinate their focus of attention with others, either by gaze following or prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also show in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighing of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent whether people attended to the flashes alone or with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as joint attention did not affect temporal audiovisual integration.


Illusions , Acoustic Stimulation , Attention , Auditory Perception , Humans , Photic Stimulation , Visual Perception
9.
Acta Psychol (Amst) ; 215: 103291, 2021 Apr.
Article En | MEDLINE | ID: mdl-33770664

Humans often perform visual tasks together, and when doing so, they tend to devise division of labor strategies to share the load. Implementing such strategies, however, is effortful as co-actors need to coordinate their actions. We tested whether pupil size - a physiological correlate of mental effort - can detect such a coordination effort in a multiple object tracking (MOT) task. Participants performed the MOT task jointly with a computer partner and either devised a division of labor strategy (main experiment) or the labor division was already pre-determined (control experiment). We observed that pupil sizes increased relative to performing the MOT task alone in the main experiment, whereas this was not the case in the control experiment. These findings suggest that pupil size can detect a rise in coordination effort, extending the view that pupil size indexes mental effort across a wide range of cognitively demanding tasks.


Computers , Pupil , Humans
10.
Vision Res ; 182: 1-8, 2021 05.
Article En | MEDLINE | ID: mdl-33550023

While passive social information (e.g. pictures of people) routinely draws one's eyes, our willingness to look at live others is more nuanced. People tend not to stare at strangers and will modify their gaze behaviour to avoid sending undesirable social signals; yet they often continue to monitor others covertly "out of the corner of their eyes." What this means for looks that are being made near to live others is unknown. Will the eyes be drawn towards the other person, or pushed away? We evaluate changes in two elements of gaze control: image-independent principles guiding how people look (e.g. biases to make eye movements along the cardinal directions) and image-dependent principles guiding what people look at (e.g. a preference for meaningful content within a scene). Participants were asked to freely view semantically unstructured (fractals) and semantically structured (rotated landscape) images, half of which were located in the space near to a live other. We found that eye movements were horizontally displaced away from a visible other starting at 1032 ms after stimulus onset when fractals but not landscapes were viewed. We suggest that the avoidance of looking towards live others extends to the near space around them, at least in the absence of semantically meaningful gaze targets.


Eye Movements , Eye , Fixation, Ocular , Humans
11.
Atten Percept Psychophys ; 83(1): 1-6, 2021 Jan.
Article En | MEDLINE | ID: mdl-33230733

Focusing attention is a key cognitive skill, but how the gaze of others affects engaged attention remains relatively unknown. We investigated if participants' attentional bias toward a location is modulated by the number of people gazing toward or away from it. We presented participants with a nonpredictive directional cue that biased attention towards a specific location. Then, any number of the four stimulus faces turned their gaze toward or away from the attended location. When all the faces looked at the attended location, participants increased their commitment to it, and response times to targets at that location were speeded. When most or all of the faces looked away from the attended location, attention was withdrawn, and response times were slowed. This study reveals that the gaze of others can penetrate one's ability to focus attention, which in turn can be both beneficial and costly to one's responses to events in the environment.


Attentional Bias , Attention , Cues , Fixation, Ocular , Humans , Reaction Time
12.
Acta Psychol (Amst) ; 212: 103205, 2021 Jan.
Article En | MEDLINE | ID: mdl-33202313

In the near future, humans will increasingly be required to cooperate and share task load with artificial agents in joint tasks, as such agents will be able to greatly assist humans in various types of tasks and contexts. In the present study, we investigated humans' willingness to share task load with a computer partner in a joint visuospatial task. The partner was described as either behaving in a human-like or machine-like way and followed a pre-defined behaviour that was either human-like or non-human-like. We found that participants successfully shared task load when the partner behaved in a human-like way. Critically, the successful collaboration was sustained throughout the experiment only when the partner was also described as behaving in a human-like way beforehand. These findings suggest that not only the behaviour of a computer partner but also the prior description of the partner is a critical factor influencing humans' willingness to share task load.


Cooperative Behavior , Social Behavior , Computers , Humans , User-Computer Interface
13.
Q J Exp Psychol (Hove) ; 73(12): 2260-2271, 2020 Dec.
Article En | MEDLINE | ID: mdl-32698727

Our senses are stimulated continuously. Through multisensory integration, different sensory inputs may or may not be combined into a unitary percept. Simultaneous with this stimulation, people are frequently engaged in social interactions, but how multisensory integration and social processing interact is largely unknown. The present study investigated if, and how, the multisensory sound-induced flash illusion is affected by a social manipulation. In the sound-induced flash illusion, a participant typically receives one visual flash and two auditory beeps and is required to indicate the number of flashes that were perceived. Often, the auditory beeps alter the perception of the flashes such that a participant tends to perceive two flashes instead of one flash. We tested whether performing a flash counting task with a partner (confederate), who was required to indicate the number of presented beeps, would modulate this illusion. We found that the sound-induced flash illusion was perceived significantly more often when the flash counting task was performed with the confederate compared with performing it alone. Yet, this effect disappeared when visual access between the two individuals was prevented. These findings, combined with previous results, suggest that performing a multisensory task jointly - in this case an audiovisual task - lowers the extent to which an individual attends to visual information, which in turn affects the multisensory integration process.


Illusions , Acoustic Stimulation , Auditory Perception , Female , Humans , Male , Photic Stimulation , Visual Perception
14.
Atten Percept Psychophys ; 82(6): 3085-3095, 2020 Aug.
Article En | MEDLINE | ID: mdl-32435973

In daily life, humans frequently perform visuospatial tasks together (e.g., visual search) and distribute the labor in such tasks. Previous research has shown that humans prefer a left and right labor division in a joint multiple object tracking (MOT) task. Yet, findings from studies investigating individuals' tracking ability suggest that attentional capacities may be used more fully with a top and bottom labor division. We investigated whether co-actors' labor division preference is influenced by how they are seated (neighboring vs. opposite each other) or how the MOT task is displayed (portrait vs. landscape). We found that pairs attained a higher performance using a top and bottom labor division and preferred this labor division over a left and right division. This preference was unaffected by the seating arrangement. For the landscape display, however, participants no longer attained a higher performance with the top and bottom labor division, and accordingly their preference for it was greatly reduced as well. Overall, we propose that co-actors are sensitive to changes within their environment, which allows them to choose a labor division that maximizes the use of their individual attentional capacities.


Attention , Joints , Humans
15.
Front Psychol ; 11: 79, 2020.
Article En | MEDLINE | ID: mdl-32116905

In daily life, humans constantly process information from multiple sensory modalities (e.g., visual and auditory). Information across sensory modalities may (or may not) be combined to form the perception of a single event via the process of multisensory integration. Recent research has suggested that performing a spatial crossmodal congruency task jointly with a partner affects multisensory integration. To date, it has not been investigated whether multisensory integration in other crossmodal tasks is also affected by performing a task jointly. To address this point, we investigated whether joint task performance also affects perceptual judgments in a crossmodal motion discrimination task and a temporal order judgment task. In both tasks, pairs of participants were presented with auditory and visual stimuli that might or might not be perceived as belonging to a single event. Each participant in a pair was required to respond to stimuli from one sensory modality only (e.g., visual stimuli only). Participants performed both individual and joint conditions. Replicating earlier multisensory integration effects, we found that participants' perceptual judgments were significantly affected by stimuli in the other modality for both tasks. However, we did not find that performing a task jointly modulated these crossmodal effects. Taken together with earlier findings, this suggests that joint task performance modulates crossmodal effects in a manner dependent on how these effects are quantified (i.e., via response times or accuracy) and on the specific task demands (i.e., whether tasks require processing stimuli in terms of location, motion, or timing).

16.
Atten Percept Psychophys ; 82(5): 2415-2433, 2020 Jul.
Article En | MEDLINE | ID: mdl-31989452

In daily life, humans often perform visual tasks, such as solving puzzles or searching for a friend in a crowd. Performing these visual searches jointly with a partner can be beneficial: The two task partners can devise effective division of labor strategies and thereby outperform individuals who search alone. To date, it is unknown whether these group benefits scale up to triads or whether the cost of coordinating with others offsets any potential benefit for group sizes above two. To address this question, we compare participants' performance in a visual search task that they perform either alone, in dyads, or in triads. When the search task is performed jointly, co-actors receive information about each other's gaze location. After controlling for speed-accuracy trade-offs, we found that triads searched faster than dyads, suggesting that group benefits do scale up to triads. Moreover, we found that the triads divided the search space in accordance with the co-actors' individual search performances but searched less efficiently than dyads. We also present a linear model to predict group benefits, which accounts for 70% of the variance. The model includes our experimental factors and a set of non-redundant predictors, quantifying the similarities in the individual performances, the collaboration between co-actors, and the estimated benefits that co-actors would attain without collaborating. Overall, the present study demonstrates that group benefits scale up to larger group sizes, but the additional gains are attenuated by the increased costs associated with devising effective division of labor strategies.


Group Processes , Social Behavior , Humans
17.
Eur J Neurosci ; 51(7): 1676-1696, 2020 04.
Article En | MEDLINE | ID: mdl-31418946

Humans frequently perform tasks collaboratively in daily life. Collaborating with others may or may not result in higher task performance than if one were to complete the task alone (i.e., a collective benefit). A recent study on collective benefits in perceptual decision-making showed that dyad members with similar individual performances attain a collective benefit. However, little is known about the physiological basis of these results. Here, we replicate this earlier work and also investigate the neurophysiological correlates of decision-making using EEG. In a two-interval forced-choice task, co-actors individually indicated the presence of a target stimulus with higher contrast and then indicated their confidence on a rating scale. Viewing the individual ratings, dyads then made a joint decision. Replicating earlier work, we found a positive correlation between the similarity of individual performances and collective benefit. We analyzed event-related potentials (ERPs) in three phases (i.e., stimulus onset, response, and feedback) using explorative cluster mass permutation tests. At stimulus onset, ERPs were significantly linearly related to our manipulation of contrast differences, validating our manipulation of task difficulty. For individual and joint responses, we found a significant centro-parietal error-related positivity for correct versus incorrect responses, which suggests that accuracy is already evaluated at the response level. At feedback presentation, we found a significant late positive fronto-central potential elicited by incorrect joint responses. In sum, these results demonstrate that response- and feedback-related components elicited by an error-monitoring system differentially integrate conflicting information exchanged during the joint decision-making process.


Decision Making , Evoked Potentials , Electroencephalography , Humans , Task Performance and Analysis
18.
Multisens Res ; 32(2): 145-163, 2019 01 01.
Article En | MEDLINE | ID: mdl-31059470

Human information processing is limited by attentional resources. That is, via attentional mechanisms humans select information that is relevant for their goals, and discard other information. While limitations of attentional processing have been investigated extensively in each sensory modality, there is debate as to whether sensory modalities access shared resources, or whether instead distinct resources are dedicated to individual sensory modalities. Research addressing this question has used dual task designs, with two tasks performed either in a single sensory modality or in two separate modalities. The rationale is that, if two tasks performed in separate sensory modalities interfere less or not at all compared to two tasks performed in the same sensory modality, then attentional resources are distinct across the sensory modalities. If task interference is equal regardless of whether tasks are performed in separate sensory modalities or the same sensory modality, then attentional resources are shared across the sensory modalities. Due to their complexity, dual task designs face many methodological difficulties. In the present review, we discuss potential confounds and countermeasures. In particular, we discuss 1) compound interference measures to circumvent problems with participants dividing attention unequally across tasks, 2) staircase procedures to match the difficulty levels of tasks, counteracting problems with interpreting results, 3) choosing tasks that continuously engage participants to minimize issues arising from task switching, and 4) reducing motor demands to avoid sources of task interference, which are independent of the involved sensory modalities.
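The staircase procedures mentioned in point 2 can be illustrated with a generic adaptive rule. The sketch below simulates a 2-down/1-up staircase (a standard rule that converges on roughly 70.7% accuracy); the observer model, step size, and level bounds are invented for illustration and are not taken from the reviewed studies.

```python
import random

def two_down_one_up(p_correct, start=1.0, step=0.1, n_trials=200, seed=1):
    """Simulate a 2-down/1-up staircase: the stimulus level is lowered
    (task made harder) after two consecutive correct responses and raised
    (task made easier) after any error. p_correct maps level -> P(correct)."""
    rng = random.Random(seed)
    level = start
    correct_streak = 0
    levels = []
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < p_correct(level):   # simulated observer response
            correct_streak += 1
            if correct_streak == 2:           # two correct in a row -> harder
                level = max(0.0, level - step)
                correct_streak = 0
        else:                                 # one error -> easier
            level = min(2.0, level + step)
            correct_streak = 0
    return levels

# Toy observer whose accuracy improves at higher (easier) stimulus levels
levels = two_down_one_up(lambda lv: min(0.99, 0.5 + 0.25 * lv))
```

Running such a staircase separately for each task, and taking the average level over the final reversals as the threshold, is one way to equate difficulty across sensory modalities before measuring dual-task interference.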


Attention/physiology , Guidelines as Topic , Psychomotor Performance/physiology , Visual Perception/physiology , Humans , Reaction Time
19.
Front Psychol ; 10: 361, 2019.
Article En | MEDLINE | ID: mdl-30858814

Humans achieve their goals in joint action tasks either by cooperation or competition. In the present study, we investigated the neural processes underpinning error and monetary reward processing in such cooperative and competitive situations. We used electroencephalography (EEG) and analyzed event-related potentials (ERPs) triggered by feedback in both social situations. Twenty-six dyads performed a joint four-alternative forced choice (4AFC) visual task either cooperatively or competitively. At the end of each trial, participants received performance feedback about their individual and joint errors and accompanying monetary rewards. Furthermore, the outcome, i.e., resulting positive, negative, or neutral rewards, was dependent on the pay-off matrix, defining the social situation as either cooperative or competitive. We used linear mixed effects models to analyze the feedback-related negativity (FRN) and used the threshold-free cluster enhancement (TFCE) method to explore activations across all electrodes and time points. We found main effects of the outcome and social situation, but no interaction at mid-line frontal electrodes. The FRN was more negative for losses than wins in both social situations. However, the FRN amplitudes differed between social situations. Moreover, we compared monetary with neutral outcomes in both social situations. Our exploratory TFCE analysis revealed that processing of feedback differs between cooperative and competitive situations at right temporo-parietal electrodes, where the cooperative situation elicited more positive amplitudes. Further, the differences induced by the social situations were stronger in participants with higher scores on a perspective-taking test. In sum, our results replicate previous studies about the FRN and extend them by comparing neurophysiological responses to positive and negative outcomes in a task that simultaneously engages two participants in competitive and cooperative situations.

20.
Front Psychol ; 9: 918, 2018.
Article En | MEDLINE | ID: mdl-29930528

In daily life, humans frequently engage in object-directed joint actions, be it carrying a table together or jointly pulling a rope. When two or more individuals control an object together, they may distribute control by performing complementary actions, e.g., when two people hold a table at opposite ends. Alternatively, several individuals may execute control in a redundant manner by performing the same actions, e.g., when jointly pulling a rope in the same direction. Previous research has investigated whether dyads can outperform individuals in tasks where control is either distributed or redundant. The aim of the present review is to integrate findings for these two types of joint control to determine common principles and explain differing results. In sum, we find that when control is distributed, individuals tend to outperform dyads or attain similar performance levels. For redundant control, conversely, dyads have been shown to outperform individuals. We suggest that these differences can be explained by the possibility to freely divide control: Having the option to exercise control redundantly allows co-actors to coordinate individual contributions in line with individual capabilities, enabling them to maximize the benefit of the available skills in the group. In contrast, this freedom to adopt and adapt customized coordination strategies is not available when the distribution of control is determined from the outset.

...